
    Effects of Sensemaking Translucence on Distributed Collaborative Analysis

    Collaborative sensemaking requires that analysts share their information and insights with each other, but this process of sharing runs the risk of prematurely focusing the investigation on specific suspects. To address this tension, we propose and test an interface for collaborative crime analysis that aims to make analysts more aware of their sensemaking processes. We compare our sensemaking translucence interface to a standard interface without special sensemaking features in a controlled laboratory study. We found that the sensemaking translucence interface significantly improved clue finding and crime solving performance, but that analysts rated the interface lower on subjective measures than the standard interface. We conclude that designing for distributed sensemaking requires balancing task performance vs. user experience and real-time information sharing vs. data accuracy.
    Comment: ACM SIGCHI CSCW 201

    "It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values

    Recently, the AI/ML research community has indicated an urgent need to establish Responsible AI (RAI) values and practices as part of the AI/ML lifecycle. Several organizations and communities are responding to this call by sharing RAI guidelines. However, there are gaps in awareness, deliberation, and execution of such practices among multi-disciplinary ML practitioners. This work contributes to the discussion by unpacking the co-production challenges practitioners face as they align their RAI values. We interviewed 23 individuals across 10 organizations who were tasked with shipping AI/ML-based products while upholding RAI norms, and found that both top-down and bottom-up institutional structures create burdens for different roles, preventing them from upholding RAI values, a challenge that is further exacerbated when executing conflicting values. We share multiple value levers used as strategies by the practitioners to resolve their challenges. We end our paper with recommendations for inclusive and equitable RAI value-practices, creating supportive organizational structures and opportunities to further aid practitioners.

    Macro-modeling and energy efficiency studies of file management in embedded systems with flash memory

    Technological advancements in computer hardware and software have made embedded systems highly affordable and widely used. Consumers have ever-increasing demands for powerful embedded devices such as cell phones, PDAs, and media players. Such complex and feature-rich embedded devices are strictly limited by their battery lifetime. Embedded systems are typically diskless and use flash for secondary storage because of its low power consumption, persistence, and small form factor. The energy efficiency of the processor and flash in an embedded system depends heavily on the choice of file system in use. To address this problem, it is necessary to provide system developers with energy profiles of file system activities and with energy-efficient file systems. In the first part of the thesis, a macro-model for the CRAMFS file system is established that characterizes the processor and flash energy consumption due to file system calls. This macro-model allows a system developer to estimate the energy consumed by CRAMFS without using an actual power setup. The second part of the thesis examines the effects of using non-volatile memory as a write-behind buffer to improve the energy efficiency of JFFS2. Experimental results show that a 4KB write-behind buffer reduces energy consumption by up to 2-3 times for consecutive small writes. In addition, the write-behind buffer conserves flash space, since transient data may never be written to flash.
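
    The macro-model idea lends itself to a simple illustration: total energy is estimated as a weighted sum of per-syscall energy coefficients, with separate components for the processor and flash. The sketch below is a minimal, hypothetical rendering of that approach; the coefficient values and function names are illustrative assumptions, not the thesis's measured CRAMFS numbers.

```python
# Hypothetical macro-model coefficients (microjoules per call), split into
# processor and flash components. Real values would come from power
# measurements of each file system call.
COEFFICIENTS_UJ = {
    "open":  {"cpu": 12.0, "flash": 3.0},
    "read":  {"cpu": 8.0,  "flash": 21.0},  # per 4KB block read
    "close": {"cpu": 2.0,  "flash": 0.0},
}

def estimate_energy_uj(call_counts: dict[str, int]) -> dict[str, float]:
    """Estimate processor and flash energy for a trace of syscall counts,
    without needing a physical power setup."""
    totals = {"cpu": 0.0, "flash": 0.0}
    for call, count in call_counts.items():
        coeff = COEFFICIENTS_UJ[call]
        totals["cpu"] += count * coeff["cpu"]
        totals["flash"] += count * coeff["flash"]
    return totals

# Example: a workload that opens a file, reads 100 blocks, and closes it.
print(estimate_energy_uj({"open": 1, "read": 100, "close": 1}))
# -> {'cpu': 814.0, 'flash': 2103.0}
```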

    Development of Automated Purchase System for NIT Rourkela

    Procurement of goods and services at NIT Rourkela is a thorough and lengthy process that is currently carried out manually. It involves a large number of activities, including fund allocation to sub-heads, enlistment of goods and services, and registration of suppliers. The purchase itself can be made in four different ways: Direct Purchase, Advertised Tender Enquiry, Limited Tender Enquiry, and Single Tender Enquiry. The entire process also requires the approval of a number of staff members at various stages. For this project, the entire process is to be automated in the form of a web-based application. The application would not only make it easier for staff and faculty to carry out the purchase process, by reducing their workload and lessening time delays, but also ensure complete transparency. The application would consist of seven modules: managing login, managing the Chart of Accounts (account categories, heads, sub-heads, funds, goods/services, and suppliers), managing Direct Purchase, managing Advertised Tender Enquiries, managing Limited Tender Enquiries, managing Single Tender Enquiries, and managing Purchase Requisitions.
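
    The Chart of Accounts described above is essentially a small hierarchy: categories contain heads, and heads contain sub-heads with allocated funds. A minimal sketch of that data model follows; the field names and structure are assumptions for illustration, not the actual NIT Rourkela schema.

```python
# Hypothetical data model for a hierarchical Chart of Accounts:
# categories -> heads -> sub-heads, with funds tracked per sub-head.
from dataclasses import dataclass, field

@dataclass
class SubHead:
    name: str
    allocated_funds: float  # funds allocated to this sub-head
    spent: float = 0.0

    def balance(self) -> float:
        """Remaining funds available for purchases under this sub-head."""
        return self.allocated_funds - self.spent

@dataclass
class Head:
    name: str
    sub_heads: list[SubHead] = field(default_factory=list)

@dataclass
class AccountCategory:
    name: str
    heads: list[Head] = field(default_factory=list)

# Example: allocate funds to a sub-head under a department head,
# then record a purchase against it.
equipment = SubHead("Lab Equipment", allocated_funds=500000.0)
cse = Head("Computer Science", [equipment])
capital = AccountCategory("Capital Expenditure", [cse])
equipment.spent += 120000.0
print(f"Remaining balance: {equipment.balance():.2f}")  # 380000.00
```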

    SPRING: speech and pronunciation improvement through games, for Hispanic children

    Lack of proper English pronunciation is a major problem for immigrant populations in developed countries like the U.S. It poses various problems, including a barrier to entry into mainstream society. This paper presents a research study that explores the use of speech technologies merged with activity-based and arcade-based games to provide pronunciation feedback to Hispanic children within the U.S. A 3-month study with an immigrant population in California was used to investigate and analyze the effectiveness of computer-aided pronunciation feedback through games. In addition to quantitative findings that point to statistically significant gains in pronunciation quality, the paper also explores qualitative findings, interaction patterns, and challenges faced by the researchers in working with this community. It also describes the issues involved in treating pronunciation as a competency.
    Comment: ACM ICTD 201

    ConstitutionMaker: Interactively Critiquing Large Language Models by Converting Feedback into Principles

    Large language model (LLM) prompting is a promising new approach for users to create and customize their own chatbots. However, current methods for steering a chatbot's outputs, such as prompt engineering and fine-tuning, do not support users in converting their natural feedback on the model's outputs into changes in the prompt or model. In this work, we explore how to enable users to interactively refine model outputs through their feedback, by helping them convert their feedback into a set of principles (i.e., a constitution) that dictate the model's behavior. From a formative study, we (1) found that users needed support converting their feedback into principles for the chatbot and (2) classified the different principle types desired by users. Inspired by these findings, we developed ConstitutionMaker, an interactive tool for converting user feedback into principles, to steer LLM-based chatbots. With ConstitutionMaker, users can provide either positive or negative feedback in natural language, select auto-generated feedback, or rewrite the chatbot's response; each mode of feedback automatically generates a principle that is inserted into the chatbot's prompt. In a user study with 14 participants, we compare ConstitutionMaker to an ablated version, where users write their own principles. With ConstitutionMaker, participants felt that their principles could better guide the chatbot, that they could more easily convert their feedback into principles, and that they could write principles more efficiently, with less mental demand. ConstitutionMaker helped users identify ways to improve the chatbot, formulate their intuitive responses to the model into feedback, and convert this feedback into specific and clear principles. Together, these findings inform future tools that support the interactive critiquing of LLM outputs.
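
    The core mechanism the abstract describes, turning each piece of feedback into a principle that is appended to the chatbot's prompt, can be sketched in a few lines. The code below is a hypothetical illustration of that general pattern, not ConstitutionMaker's actual implementation; in the real tool an LLM generates the principle text, whereas this sketch uses a trivial template to stay self-contained.

```python
# Minimal sketch of the feedback-to-principle pattern: user feedback
# becomes an imperative principle, and the accumulated principles form
# a "constitution" inserted into the system prompt.

BASE_PROMPT = "You are a travel-planning chatbot."

def feedback_to_principle(kind: str, feedback: str) -> str:
    """Convert raw feedback into a principle. A template stands in for
    the LLM-based principle generation described in the paper."""
    if kind == "negative":
        return f"Avoid the following: {feedback}"
    return f"Always do the following: {feedback}"

def build_prompt(principles: list[str]) -> str:
    """Insert the accumulated constitution into the chatbot's prompt."""
    rules = "\n".join(f"- {p}" for p in principles)
    return f"{BASE_PROMPT}\nFollow these principles:\n{rules}"

principles = [
    feedback_to_principle("negative", "listing more than three options per reply"),
    feedback_to_principle("positive", "asking for the user's budget first"),
]
print(build_prompt(principles))
```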

    Diagnostic Utility of TTF-1 and P40 Immunohistochemical Markers for Subtyping of Non-Small Cell Lung Carcinoma

    Background: Lung cancer is the leading cause of cancer-related mortality worldwide. Although the pathological diagnosis of lung carcinoma is often constrained by the small specimens available, the availability of targeted therapies has created a need for precise subtyping of non-small cell lung carcinoma. Several recent studies have demonstrated that immunohistochemical (IHC) markers can help differentiate squamous cell carcinoma from adenocarcinoma, not only on surgically resected specimens but also on small biopsy samples. Material and Methods: In a cross-sectional study of one year's duration, 50 cases of lung carcinoma on guided biopsies were first reported on Haematoxylin and Eosin sections and later subjected to IHC using the markers TTF-1 and p40. Results: In our study, IHC with TTF-1 and p40 aided subtyping in 35 (92.1%) cases of non-small cell lung carcinoma, and this diagnostic accuracy was statistically significant (p < 0.001). On statistical analysis, p40 showed 100% sensitivity and 85.7% specificity for squamous differentiation, whereas TTF-1 showed 85.7% sensitivity and 100% specificity for adenocarcinoma. Of the 50 cases, after IHC, 29 (58%) were diagnosed as squamous cell carcinoma, 18 (36%) as adenocarcinoma, and 3 (6%) as non-small cell lung carcinoma. Conclusion: A minimalist IHC panel of p40 and TTF-1 on biopsy samples was effective in correctly subtyping most cases of non-small cell lung carcinoma and helps spare material for molecular testing. Keywords: non-small cell lung carcinoma, immunohistochemistry, squamous cell carcinoma
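
    The sensitivity and specificity figures quoted above come from standard confusion-matrix formulas; a short worked example follows. The counts below are illustrative, chosen only to reproduce the reported p40 figures (100% sensitivity, 85.7% specificity), and are not the study's raw data.

```python
# Standard diagnostic-accuracy formulas applied to hypothetical
# confusion-matrix counts for p40 as a squamous-differentiation marker.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true positives detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of true negatives correctly excluded: TN / (TN + FP)."""
    return tn / (tn + fp)

tp, fn = 29, 0  # squamous cases staining positive / missed
tn, fp = 18, 3  # non-squamous cases negative / falsely positive
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 100.0%
print(f"specificity = {specificity(tn, fp):.1%}")  # 85.7%
```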

    Massively distributed authorship of academic papers

    Wiki-like or crowdsourcing models of collaboration can provide a number of benefits to academic work. These techniques may engage expertise from different disciplines and potentially increase productivity. This paper presents a model of massively distributed collaborative authorship of academic papers. This model, developed by a collective of thirty authors, identifies key tools and techniques that would be necessary or useful to the writing process. The process of collaboratively writing this paper was used to discover, negotiate, and document issues in massively authored scholarship. Our work provides the first extensive discussion of the experiential aspects of large-scale collaborative research.